Whistleblower Protection


How California's New AI Law Protects Whistleblowers

TIME - Tech

Booth is a reporter at TIME. Governor Gavin Newsom speaks at Google about preparing students and workers for the next generation of technology, in San Francisco, California, on August 7, 2025. CEOs of the companies racing to build smarter AI--Google DeepMind, OpenAI, xAI, and Anthropic--have been clear about the stakes.


The UN's AI warnings grow louder

TIME - Tech

Welcome back to In the Loop, a new twice-weekly newsletter about AI. It was a busy week for our team: Tharin Pillay was on site during the UN General Assembly in New York, while Harry Booth and Nikita Ostrovsky were at the "All In AI" event in Montreal. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? The United Nations General Assembly met this week in New York. While the assembly members spent much of their time on the crises in Palestine and Sudan, they also devoted a good chunk to AI.


'The Stakes Are Incredibly High.' Two Former OpenAI Employees On the Need for Whistleblower Protections

TIME - Tech

This could be a costly interview for William Saunders. The former safety researcher resigned from OpenAI in February, and--like many other departing employees--signed a non-disparagement agreement in order to keep the right to sell his equity in the company. Although he says OpenAI has since told him that it does not intend to enforce the agreement, and has made similar public commitments, he is still taking a risk by speaking out. "By speaking to you I might never be able to access vested equity worth millions of dollars," he tells TIME. "But I think it's more important to have a public dialogue about what is happening at these AGI companies."


Former OpenAI, Google and Anthropic workers are asking AI companies for more whistleblower protections

Engadget

A group of current and former employees from leading AI companies like OpenAI, Google DeepMind and Anthropic has signed an open letter asking for greater transparency and protection from retaliation for those who speak out about the potential risks of AI. "So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public," the letter, which was published on Tuesday, says. "Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues." The letter comes just a couple of weeks after a Vox investigation revealed OpenAI had attempted to muzzle recently departing employees by forcing them to choose between signing an aggressive non-disparagement agreement or risking the loss of their vested equity in the company. After the report, OpenAI CEO Sam Altman called the provision "genuinely embarrassing" and claimed it had been removed from recent exit documentation, though it's unclear if it remains in force for some employees. The 13 signatories include former OpenAI employees Jacob Hilton, William Saunders and Daniel Kokotajlo.


Ex-Google researcher: AI workers need whistleblower protection

#artificialintelligence

Artificial intelligence research leads to new cutting-edge technologies, but it's expensive. Big Tech companies, which are powered by AI and have deep pockets, often take on this work -- but that gives them the power to censor or impede research that casts them in an unfavorable light, according to Timnit Gebru, a computer scientist, co-founder of the nonprofit organization Black in AI and the former co-leader of Google's Ethical AI team. The situation imperils both the rights of AI workers at those companies and the quality of research that is shared with the public, said Gebru, speaking at the recent EmTech MIT conference hosted by MIT Technology Review. "It's all the incentive structures that are not in place for you to challenge the status quo," she said. Gebru was forced out at Google last December (Gebru said she was fired, while Google said she resigned) after co-writing a paper about the risks of large AI language models, such as environmental impacts and the difficulty in finding embedded biases.


Google employee group urges Congress to strengthen whistleblower protections for AI researchers

#artificialintelligence

Google's decision to fire its AI ethics leaders is a matter of "urgent public concern" that merits strengthening laws to protect AI researchers and tech workers who want to act as whistleblowers. That's according to a letter published by Google employees today in support of the Ethical AI team at Google and former co-leads Margaret Mitchell and Timnit Gebru, whom Google fired two weeks ago and in December 2020, respectively. The firing of Gebru, one of the best-known Black female AI researchers in the world and one of few Black women at Google, drew public opposition from thousands of Google employees. It also led critics to claim the incident may have "shattered" Google's Black talent pipeline and signaled the collapse of AI ethics research in corporate environments. "We must stand up together now, or the precedent we set for the field -- for the integrity of our own research and for our ability to check the power of big tech -- bodes a grim future for us all," reads the letter published by the group Google Walkout for Change.


From whistleblower laws to unions: How Google's AI ethics meltdown could shape policy

#artificialintelligence

It's been two weeks since Google fired Timnit Gebru, a decision that still seems incomprehensible. Gebru is one of the most highly regarded AI ethics researchers in the world, a pioneer whose work has highlighted the ways tech fails marginalized communities when it comes to facial recognition and, more recently, large language models. Of course, this incident didn't happen in a vacuum. Case in point: Gebru was fired the same day the National Labor Relations Board (NLRB) filed a complaint against Google for illegally spying on employees and the retaliatory firing of employees interested in unionizing. Gebru's dismissal also calls into question issues of corporate influence in research, demonstrates the shortcomings of self-regulation, and highlights the poor treatment of Black people and women in tech in a year when Black Lives Matter sparked the largest protest movement in U.S. history. In an interview with VentureBeat last week, Gebru called the way she was fired disrespectful and described a companywide memo sent by CEO Sundar Pichai as "dehumanizing." To delve further into possible outcomes following Google's AI ethics meltdown, VentureBeat spoke with five experts in the field about Gebru's dismissal and the issues it raises.